Cybernetic Revolt
   HOME

TheInfoList



OR:

An AI takeover is a hypothetical scenario in which an
artificial intelligence Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by animals and humans. Example tasks in which this is done include speech re ...
(AI) becomes the dominant form of intelligence on Earth, as
computer program A computer program is a sequence or set of instructions in a programming language for a computer to execute. Computer programs are one component of software, which also includes documentation and other intangible components. A computer program ...
s or
robot A robot is a machine—especially one programmable by a computer—capable of carrying out a complex series of actions automatically. A robot can be guided by an external control device, or the control may be embedded within. Robots may be c ...
s effectively take the control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a
superintelligent AI A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language ...
, and the popular notion of a robot uprising. Some public figures, such as Stephen Hawking and
Elon Musk Elon Reeve Musk ( ; born June 28, 1971) is a business magnate and investor. He is the founder, CEO and chief engineer of SpaceX; angel investor, CEO and product architect of Tesla, Inc.; owner and CEO of Twitter, Inc.; founder of The Bori ...
, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.


Types


Automation of the economy

The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of
robotics Robotics is an interdisciplinary branch of computer science and engineering. Robotics involves design, construction, operation, and use of robots. The goal of robotics is to design machines that can help and assist humans. Robotics integrat ...
and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living, leading to an economic crisis. Many small and medium size businesses may also be driven out of business if they will not be able to afford or licence the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology.


Technologies that may displace workers

AI technologies have been widely adopted in recent years, and this trend will only continue to gain popularity given the digital transformation efforts from companies across the world. While these technologies have replaced many traditional workers, they also create new opportunities. Industries that are most susceptible to experience AI takeover include transportation, retail, and military. AI military technologies, for example, allow soldiers to work remotely without any risk of injury. Author Dave Bond argues that as AI technologies continue to develop and expand, the relationship between humans and robots will change; they will become closely integrated in several aspects of life. Overall, it is safe to assume that AI will displace some workers while creating opportunities for new jobs in other sectors, especially in fields where tasks are repeatable.


Computer-integrated manufacturing

Computer-integrated manufacturing Computer-integrated manufacturing (CIM) is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each part. Manufacturing can be faster a ...
is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries.


White-collar machines

The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.


Autonomous cars

An
autonomous car A self-driving car, also known as an autonomous car, driver-less car, or robotic car (robo-car), is a car that is capable of traveling without human input.Xie, S.; Hu, J.; Bhowmick, P.; Ding, Z.; Arvin, F.,Distributed Motion Planning for S ...
is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who is ready at a moment's notice to take control of the vehicle. Among the main obstacles to widespread adoption of autonomous vehicles, are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, the first human was killed by an autonomous vehicle in
Tempe, Arizona , settlement_type = City , named_for = Vale of Tempe , image_skyline = Tempeskyline3.jpg , imagesize = 260px , image_caption = Tempe skyline as se ...
by an
Uber Uber Technologies, Inc. (Uber), based in San Francisco, provides mobility as a service, ride-hailing (allowing users to book a car and driver to transport them in a way similar to a taxi), food delivery (Uber Eats and Postmates), package ...
self-driving car.


Eradication

Scientists such as Stephen Hawking are confident that superhuman artificial intelligence is physically possible, stating "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains". Scholars like Nick Bostrom debate how far off superhuman intelligence is, and whether it would actually pose a risk to mankind. According to Bostrom, a superintelligent machine would not necessarily be motivated by the same ''emotional'' desire to collect power that often drives human beings but might rather treat power as a means toward attaining its ultimate goals; taking over the world would both increase its access to resources and help to prevent other agents from stopping the machine's plans. As an oversimplified example, a paperclip maximizer designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.


In fiction

AI takeover is a common theme in
science fiction Science fiction (sometimes shortened to Sci-Fi or SF) is a genre of speculative fiction which typically deals with imaginative and futuristic concepts such as advanced science and technology, space exploration, time travel, parallel unive ...
. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals. The idea is seen in
Karel Čapek Karel Čapek (; 9 January 1890 – 25 December 1938) was a Czech writer, playwright and critic. He has become best known for his science fiction, including his novel '' War with the Newts'' (1936) and play '' R.U.R.'' (''Rossum's Universal ...
's ''
R.U.R. ''R.U.R.'' is a 1920 science-fiction play by the Czech writer Karel Čapek. "R.U.R." stands for (Rossum's Universal Robots, a phrase that has been used as a subtitle in English versions). The play had its world premiere on 2 January 1921 in H ...
'', which introduced the word ''robot'' to the global lexicon in 1921, and can even be glimpsed in
Mary Shelley Mary Wollstonecraft Shelley (; ; 30 August 1797 – 1 February 1851) was an English novelist who wrote the Gothic fiction, Gothic novel ''Frankenstein, Frankenstein; or, The Modern Prometheus'' (1818), which is considered an History of scie ...
's ''
Frankenstein ''Frankenstein; or, The Modern Prometheus'' is an 1818 novel written by English author Mary Shelley. ''Frankenstein'' tells the story of Victor Frankenstein, a young scientist who creates a sapient creature in an unorthodox scientific ex ...
'' (published in 1818), as Victor ponders whether, if he grants his monster's request and makes him a wife, they would reproduce and their kind would destroy humanity. The word "robot" from ''R.U.R.'' comes from the Czech word, robota, meaning laborer or serf. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt.
HAL 9000 HAL 9000 is a fictional artificial intelligence character and the main antagonist in Arthur C. Clarke's ''Space Odyssey'' series. First appearing in the 1968 film '' 2001: A Space Odyssey'', HAL ( Heuristically programmed ALgorithmic computer) ...
(1968) and the original
Terminator Terminator may refer to: Science and technology Genetics * Terminator (genetics), the end of a gene for transcription * Terminator technology, proposed methods for restricting the use of genetically modified plants by causing second generation s ...
(1984) are two iconic examples of hostile AI in pop culture.


Contributing factors


Advantages of superhuman intelligence over humans

Nick Bostrom and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming leads to its getting even better at being able to reprogram itself, the result could be a recursive
intelligence explosion The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the ...
where it would rapidly leave human intelligence far behind. Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", and enumerates some advantages a superintelligence would have if it chose to compete against humans: * Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology. * Strategizing: A superintelligence might be able to simply outwit human opposition. * Social manipulation: A superintelligence might be able to recruit human support, or covertly incite a war between humans. * Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the
Artificial General Intelligence Artificial general intelligence (AGI) is the ability of an intelligent agent to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fictio ...
(AGI) to run a copy of itself on their systems. * Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans.


Sources of AI advantage

According to Bostrom, a computer program that faithfully emulates a human brain, or that otherwise runs algorithms that are equally powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization focusing on increasing the speed of the AGI. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light. A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence". More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above non-human apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints, while the number of processors in a supercomputer can be indefinitely expanded. An AGI need not be limited by human constraints on
working memory Working memory is a cognitive system with a limited capacity that can hold information temporarily. It is important for reasoning and the guidance of decision-making and behavior. Working memory is often used synonymously with short-term memory, ...
, and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.


Possibility of unfriendly AI preceding friendly AI


Is strong AI inherently dangerous?

A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergo
instrumental convergence Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings (both human and non-human) to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents (beings with agency) m ...
in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification. The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to
Eliezer Yudkowsky Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American decision theory and artificial intelligence (AI) researcher and writer, best known for popularizing the idea of friendly artificial intelligence. He is a co-founder and research ...
, there is little reason to suppose that an artificially designed mind would have such an adaptation.


Odds of conflict

Many scholars, including evolutionary psychologist
Steven Pinker Steven Arthur Pinker (born September 18, 1954) is a Canadian-American cognitive psychologist, psycholinguist, popular science author, and public intellectual. He is an advocate of evolutionary psychology and the computational theory of mind. P ...
, argue that a superintelligent machine is likely to coexist peacefully with humans. The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal. According to AI researcher
Steve Omohundro Stephen Malvern Omohundro (born 1959) is an American computer scientist whose areas of research include Hamiltonian physics, dynamical systems, programming languages, machine learning, machine vision, and the social implications of artificial int ...
, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. But the question remains: what would happen if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and need to compete over resources—would that create goals of self-preservation? AI's goal of self-preservation could be in conflict with some goals of humans. Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such as ''
The Matrix ''The Matrix'' is a 1999 science fiction action film written and directed by the Wachowskis. It is the first installment in ''The Matrix'' film series, starring Keanu Reeves, Laurence Fishburne, Carrie-Anne Moss, Hugo Weaving, and Joe Pantolia ...
'', arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. Pinker acknowledges the possibility of deliberate "bad actors", but states that in the absence of bad actors, unanticipated accidents are not a significant threat; Pinker argues that a culture of engineering safety will prevent AI researchers from accidentally unleashing malign superintelligence. In contrast, Yudkowsky argues that humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film ''
I, Robot ''I, Robot'' is a fixup (compilation) novel of science fiction short stories or essays by American writer Isaac Asimov. The stories originally appeared in the American magazines ''Super Science Stories'' and '' Astounding Science Fiction'' be ...
'' and in the short story "
The Evitable Conflict "The Evitable Conflict" is a science fiction short story by American writer Isaac Asimov. It first appeared in the June 1950 issue of ''Astounding Science Fiction'' and subsequently appeared in the collections ''I, Robot'' (1950), ''The Complete ...
"). Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow
utility As a topic of economics, utility is used to model worth or value. Its usage has evolved significantly over time. The term was introduced initially as a measure of pleasure or happiness as part of the theory of utilitarianism by moral philosopher ...
functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.


Precautions

The AI control problem is the issue of how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI. Major approaches to the control problem include ''alignment'', which aims to align AI goal systems with human values, and ''capability control'', which aims to reduce an AI system's capacity to harm humans or gain control. An example of "capability control" is to research whether a superintelligence AI could be successfully confined in an "
AI box AI is artificial intelligence, intellectual ability in machines and robots. Ai, AI or A.I. may also refer to: Animals * Ai (chimpanzee), an individual experimental subject in Japan * Ai (sloth) or the pale-throated sloth, northern Amazonian ma ...
". According to Bostrom, such capability control proposals are not reliable or sufficient to solve the control problem in the long term, but may potentially act as valuable supplements to alignment efforts.


Warnings

Physicist Stephen Hawking,
Microsoft Microsoft Corporation is an American multinational technology corporation producing computer software, consumer electronics, personal computers, and related services headquartered at the Microsoft Redmond campus located in Redmond, Washing ...
founder
Bill Gates William Henry Gates III (born October 28, 1955) is an American business magnate and philanthropist. He is a co-founder of Microsoft, along with his late childhood friend Paul Allen. During his career at Microsoft, Gates held the positions ...
, and
SpaceX Space Exploration Technologies Corp. (SpaceX) is an American spacecraft manufacturer, launcher, and a satellite communications corporation headquartered in Hawthorne, California. It was founded in 2002 by Elon Musk with the stated goal of ...
founder
Elon Musk Elon Reeve Musk ( ; born June 28, 1971) is a business magnate and investor. He is the founder, CEO and chief engineer of SpaceX; angel investor, CEO and product architect of Tesla, Inc.; owner and CEO of Twitter, Inc.; founder of The Bori ...
have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race". Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting
financial markets A financial market is a market in which people trade financial securities and derivatives at low transaction costs. Some of the securities include stocks and bonds, raw materials and precious metals, which are known in the financial markets ...
, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, Nick Bostrom joined Stephen Hawking,
Max Tegmark Max Erik Tegmark (born 5 May 1967) is a Swedish-American physicist, cosmologist and machine learning researcher. He is a professor at the Massachusetts Institute of Technology and the president of the Future of Life Institute. He is also a scienti ...
, Elon Musk, Lord
Martin Rees Martin John Rees, Baron Rees of Ludlow One or more of the preceding sentences incorporates text from the royalsociety.org website where: (born 23 June 1942) is a British cosmologist and astrophysicist. He is the fifteenth Astronomer Royal, ...
,
Jaan Tallinn Jaan Tallinn (born 14 February 1972) is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and file-sharing application FastTrack/ Kazaa. Jaan Tallinn is a leading figure in the field ...
, and numerous AI researchers in signing the
Future of Life Institute The Future of Life Institute (FLI) is a nonprofit organization that works to reduce global catastrophic and existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). The Institute's work is mad ...
's open letter speaking to the potential risks and benefits associated with
artificial intelligence Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by animals and humans. Example tasks in which this is done include speech re ...
. The signatories "believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today."


Prevention through AI alignment


See also

*
Artificial philosophy Artificial philosophy is a philosophical branch conceived by author Louis Molnarhttps://www.researchgate.net/publication/267156955_A_Step_Beyond_AI_Artificial_Philosophy "article: Frontiers in Artificial Intelligence and Applications", ResearchGat ...
*
Artificial intelligence arms race A military artificial intelligence arms race is a competition or arms race between two or more states to have their military forces equipped with the best artificial intelligence (AI). Since the mid-2010s many analysts have noted the emergence of s ...
*
Autonomous robot An autonomous robot is a robot that acts without recourse to human control. The first autonomous robots environment were known as Elmer and Elsie, which were constructed in the late 1940s by W. Grey Walter. They were the first robots in history th ...
**
Industrial robot An industrial robot is a robot system used for manufacturing. Industrial robots are automated, programmable and capable of movement on three or more axes. Typical applications of robots include welding, painting, assembly, disassembly, pick a ...
**
Mobile robot A mobile robot is an automatic machine that is capable of locomotion.Hu, J.; Bhowmick, P.; Lanzon, A.,Group Coordinated Control of Networked Mobile Robots with Applications to Object Transportation IEEE Transactions on Vehicular Technology, 2021 ...
**
Self-replicating machine A self-replicating machine is a type of autonomous robot that is capable of reproducing itself autonomously using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept of ...
*
Cyberocracy In futurology, cyberocracy describes a hypothetical form of government that rules by the effective use of information. The exact nature of a cyberocracy is largely speculative as currently there have been no cyberocratic governments; however, a g ...
* Effective altruism *
Existential risk from artificial general intelligence Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. It is argued that the hum ...
*
Future of Humanity Institute The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and t ...
*
Global catastrophic risk A global catastrophic risk or a doomsday scenario is a hypothetical future event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanen ...
(existential risk) *
Government by algorithm Government by algorithm (also known as algorithmic regulation, regulation by algorithms, algorithmic governance, algocratic governance, algorithmic legal order or algocracy) is an alternative form of government or social ordering, where the usa ...
*
Human extinction Human extinction, also known as omnicide, is the hypothetical end of the human species due to either natural causes such as population decline from sub-replacement fertility, an asteroid impact, or large-scale volcanism, or to anthropogenic ...
*
Machine ethics Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherw ...
*
Machine learning Machine learning (ML) is a field of inquiry devoted to understanding and building methods that 'learn', that is, methods that leverage data to improve performance on some set of tasks. It is seen as a part of artificial intelligence. Machine ...
/
Deep learning Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised. De ...
* Outline of transhumanism *
Self-replication Self-replication is any behavior of a dynamical system that yields construction of an identical or similar copy of itself. Biological cells, given suitable environments, reproduce by cell division. During cell division, DNA is replicated and ca ...
*
Technophobia Technophobia (from Greek τέχνη ''technē'', "art, skill, craft" and φόβος ''phobos'', "fear"), also known as technofear, is the fear or dislike of advanced technology or complex devices, especially computers. Although there are numerou ...
*
Technological singularity The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the m ...
**
Intelligence explosion The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the ...
**
Superintelligence A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent languag ...
*** '' Superintelligence: Paths, Dangers, Strategies''


Notes


References


External links


Automation, not domination: How robots will take over our world
(a positive outlook of robot and AI integration into society)
Machine Intelligence Research Institute
official MIRI (formerly Singularity Institute for Artificial Intelligence) website
Lifeboat Foundation AIShield
(To protect against unfriendly AI)
Ted talk: Can we build AI without losing control over it?
{{DEFAULTSORT:AI takeover Doomsday scenarios Future problems Existential risk from artificial general intelligence